Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)
Authors
Abstract
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. The aim of this research was to interpret and explain the decisions made by complex artificial intelligence (AI) models and to understand the decision-making process of these models. As part of the research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean the data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset, such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainability methods such as LIME (local interpretable model-agnostic explanations) were applied to the HateXplain dataset. Variants of BERT (bidirectional encoder representations from transformers) + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67% were created to achieve good performance in terms of explainability on the ERASER (evaluating rationales and simple English reasoning) benchmark.
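As an intuition for the kind of local explanation LIME produces, the following minimal perturbation-based sketch scores each word of a tweet by how much removing it shifts a toy classifier's predicted hate-speech probability. The corpus, labels, model choice, and leave-one-word-out scoring are all illustrative assumptions, not the paper's actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus with binary labels (1 = hateful, 0 = benign); purely illustrative.
texts = [
    "you are awful and stupid", "have a great day friend",
    "stupid hateful comment", "what a lovely friendly post",
    "awful hateful people", "lovely great weather today",
]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def word_importance(text, model):
    """LIME-style local attribution: drop in P(hateful) when a word is removed."""
    words = text.split()
    base = model.predict_proba([text])[0][1]  # P(class 1) for the full text
    scores = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores[w] = base - model.predict_proba([perturbed])[0][1]
    return scores

scores = word_importance("you stupid awful person", model)
# Words seen only in hateful examples get positive scores; an out-of-vocabulary
# word such as "person" leaves the TF-IDF vector unchanged and scores ~0.
```

Real LIME fits a local surrogate model over many random perturbations; this leave-one-out variant only conveys the intuition behind attributing a prediction to individual tokens.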
Similar Resources
Detecting Hate Speech in Social Media
In this paper we examine methods to detect hate speech in social media, while distinguishing this from general profanity. We aim to establish lexical baselines for this task by applying supervised classification methods using a recently released dataset annotated for this purpose. As features, our system uses character n-grams, word n-grams and word skip-grams. We obtain results of 78% accuracy...
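The character and word n-gram features described above can be sketched with scikit-learn's `CountVectorizer` (a minimal illustration; the n-gram ranges are assumptions, and skip-grams would require a custom analyzer):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["hate speech detection", "free speech online"]  # illustrative snippets

# Character n-grams of length 2-4 (spaces are included in the n-grams).
char_vec = CountVectorizer(analyzer="char", ngram_range=(2, 4))
X_char = char_vec.fit_transform(docs)

# Word unigrams and bigrams.
word_vec = CountVectorizer(analyzer="word", ngram_range=(1, 2))
X_word = word_vec.fit_transform(docs)
```

Both matrices have one row per document; concatenating them gives the kind of combined lexical feature set a supervised classifier can be trained on.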
Building Explainable Artificial Intelligence Systems
As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...
Explainable Artificial Intelligence for Training and Tutoring
This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.
Explainable Artificial Intelligence via Bayesian Teaching
Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...
Automated Reasoning for Explainable Artificial Intelligence
Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligence...
Journal
Journal title: Algorithms
Year: 2022
ISSN: 1999-4893
DOI: https://doi.org/10.3390/a15080291